22 research outputs found

    Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege

    Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in the field of medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatments and adequate handling in the context of care. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could run autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while at the same time reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Its use entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility. Arguments: To whom the decisions of such systems are normatively attributable is a core question of legal discourse. The choice, obvious for reasons of practicability, of attributing the behaviour of an AI system to the human being who makes the final decision is not convincing in all cases; it often degrades that person to a symbolic “liability servant” and imposes the risks of the technologies on them in a one-sided manner. Furthermore, in the field of medicine and care, questions arise regarding the approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Since the systems require large amounts of reliable data for training and later use, especially through further learning, adequate handling of personal data is also necessary with regard to data protection law in the area of care and medicine. Conclusions: It is therefore advisable to address the potential for conflict from an ethical and legal perspective at an early stage in order to prevent potential societal fear of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to meet these challenges.

    CosmoScout VR: A Modular 3D Solar System Based on SPICE

    We present CosmoScout VR - a modular 3D Solar System for interactive exploration and presentation of large space mission datasets. This paper describes the overall architecture as well as several core components of the framework. To foster the application in various scientific domains, CosmoScout VR employs a plugin-based architecture. This not only reduces development times but also allows scientists to create their own data visualization plugins without having to modify the core source code of CosmoScout VR. One of the most important plugins - level-of-detail terrain rendering - is described in greater detail in this paper. Another key feature of CosmoScout VR is the scene graph, which is tightly coupled with NASA's SPICE library to allow for high-precision positioning of celestial objects such as planets, moons, and spacecraft. SPICE is also used for seamless navigation throughout the Solar System, in which the user automatically follows the closest body. During navigation, the virtual scene is scaled such that the closest celestial body is always within arm's reach. This allows for simultaneous exploration of multiple datasets in their spatial context at diverse scales. However, the navigation uses all six degrees of freedom, which can induce motion sickness. In this paper, we present some countermeasures and evaluate their effectiveness in a user study. CosmoScout VR is open source, cross-platform, and while it can run on conventional desktop PCs, it also supports stereoscopic multi-screen systems such as display walls, domes, or CAVEs.
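    The distance-dependent scene scaling described above can be illustrated with a minimal sketch. The function name, the arm's-reach default, and the clamping bounds are hypothetical; CosmoScout VR's actual navigation handles more cases, such as smooth transitions between bodies.

```python
def observer_scale(dist_to_surface_m, arm_reach_m=1.0,
                   min_scale=1e-20, max_scale=1.0):
    """Scale factor that shrinks the virtual scene so that the distance
    to the closest body's surface maps to roughly arm's reach.
    All names and bounds are illustrative, not CosmoScout VR's API."""
    if dist_to_surface_m <= 0.0:
        return max_scale  # observer at or below the surface: do not shrink
    scale = arm_reach_m / dist_to_surface_m
    return max(min_scale, min(scale, max_scale))
```

    Applied every frame, such a rule keeps a body thousands of kilometres away at apparent arm's reach while leaving near-surface navigation at true scale.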

    Real-Time Rendering of Eclipses without Incorporation of Atmospheric Effects

    In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect which is known as limb darkening). This makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which requires very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state-of-the-art and to reference images. The implementation has been published under the MIT license.
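    The geometric core of one entry in such a precomputed shadow map is the fraction of the Sun's apparent disk hidden by the occluder. The sketch below treats both apparent disks as flat circles and deliberately ignores limb darkening, which the paper explicitly models; the function and parameter names are illustrative, not the paper's parametrization.

```python
import math

def occluded_fraction(r_sun, r_occ, d):
    """Fraction of the Sun's apparent disk covered by a spherical
    occluder, given the angular radii r_sun and r_occ and the angular
    separation d of the disk centers (small angles, flat-circle model)."""
    if d >= r_sun + r_occ:           # disks do not overlap
        return 0.0
    if d <= abs(r_sun - r_occ):      # one disk fully inside the other
        return min(1.0, (r_occ / r_sun) ** 2)
    # area of the lens-shaped intersection of the two circles
    a1 = r_sun**2 * math.acos((d*d + r_sun**2 - r_occ**2) / (2*d*r_sun))
    a2 = r_occ**2 * math.acos((d*d + r_occ**2 - r_sun**2) / (2*d*r_occ))
    a3 = 0.5 * math.sqrt((-d + r_sun + r_occ) * (d + r_sun - r_occ)
                         * (d - r_sun + r_occ) * (d + r_sun + r_occ))
    return (a1 + a2 - a3) / (math.pi * r_sun**2)
```

    Evaluating this over normalized separations and radius ratios would fill a table resembling the occluder-independent shadow map described above.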

    Comparison of Depth Buffer Techniques for Large and Detailed 3D Scenes

    Large scale 3D scenes in applications like space simulations are often subject to depth buffer related issues and visual artefacts like Z-fighting and spatial jittering. These issues are primarily a result of indistinguishable depth buffer values. To mitigate these issues, many techniques have been developed over time to better distribute depth values over the clipping range. These techniques range from simple adjustments of the projection matrix to complex solutions like multistage rendering with layered depth buffers. This work presents, compares and evaluates commonly used approaches found in literature and real world applications. An experiment is set up to compare the presented depth buffer techniques using the metric of minimum triangle separation (MTS). The gathered results are presented and evaluated to give a good overview of which techniques are well suited for use in applications with large scale 3D scenes.
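    Among the simpler techniques compared in such work are alternative depth mappings. The sketch below contrasts three common ones; the formulas are standard in the graphics literature, but the helper names are ours, and the paper's MTS-based evaluation covers further approaches such as multi-pass layered buffers.

```python
import math

def standard_ndc_depth(z, near, far):
    """Classic OpenGL perspective depth, z in [near, far] -> [-1, 1].
    Hyperbolic: most precision is spent very close to the near plane."""
    return (far + near) / (far - near) - 2.0 * far * near / ((far - near) * z)

def reversed_z_depth(z, near, far):
    """Reversed-Z, [near, far] -> [1, 0]; combined with a floating-point
    depth buffer this spreads precision far more evenly over the range."""
    return near * (far - z) / (z * (far - near))

def logarithmic_depth(z, near, far):
    """Logarithmic depth, [near, far] -> [0, 1]; roughly constant
    relative precision, at the cost of writing depth in the shader."""
    return math.log(z / near) / math.log(far / near)
```

    Sampling these mappings over a large [near, far] range makes the uneven value distribution of the standard mapping immediately visible.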

    STRIELAD - A Scalable Toolkit for Real-time Interactive Exploration of Large Atmospheric Datasets

    Technological advances in high performance computing and maturing physical models allow scientists to simulate weather and climate evolutions with increasing accuracy. While this improved accuracy allows us to explore complex dynamical interactions within such physical systems that were inconceivable a few years ago, it also results in grand challenges regarding the data visualization and analytics process. We present STRIELAD, a scalable weather analytics toolkit, which allows for interactive exploration and real-time visualization of such large scale datasets. It combines parallel and distributed feature extraction using high-performance computing resources with smart level-of-detail rendering methods to ensure interactivity during the complete analysis process.

    Real-time Interactive Exploration of Large Atmospheric Datasets in Virtual Reality

    While technological advances in high performance computing allow for ever-increasing accuracy in climate and weather simulations, they also lead to grand challenges regarding the data visualization and analytics process. We present a visualization framework, which allows for interactive exploration and real-time visualization of such large scale datasets in virtual reality. It combines parallel and distributed feature extraction using high-performance computing resources with octree-based level-of-detail rendering methods to ensure high frame rates during the complete analysis process. When parameters such as an iso-value or the current time-step are modified, the visualization is updated progressively in a view-dependent manner. In addition, the data is shown in its geographical, planetary and celestial context as part of our Solar System. Planets are rendered with a sophisticated level-of-detail system based on the HEALPix tessellation of spheres. HEALPix tiles have equal areas and do not suffer from singularities at the poles, which are a common issue with other tessellations. Geographical datasets (e.g. satellite images, digital elevation data or vector maps) are loaded from Web-Map-Services (WMS). Example datasets for Earth include, but are not limited to, Sentinel images, Open Street Map, TanDEM-X or SRTM30. NASA's SPICE library is used to compute the positions of the Sun, planets, moons and stars.
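    The octree-based level-of-detail selection mentioned above can be sketched as a view-dependent traversal that stops refining once a node's projected error is small enough on screen. The node layout, the error metric, and all names below are illustrative, not the framework's actual implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple            # (x, y, z) position in world units
    geometric_error: float   # simplification error of this node's data
    children: list = field(default_factory=list)

def select_lod(node, cam_pos, err_px, fov_scale):
    """Collect the coarsest set of nodes whose estimated screen-space
    error stays below err_px pixels. fov_scale converts world error per
    unit distance into pixels (a stand-in for the projection setup)."""
    dist = math.dist(node.center, cam_pos)
    screen_err = fov_scale * node.geometric_error / max(dist, 1e-9)
    if screen_err <= err_px or not node.children:
        return [node]          # coarse node suffices, or nothing finer exists
    selected = []
    for child in node.children:
        selected.extend(select_lod(child, cam_pos, err_px, fov_scale))
    return selected
```

    Rerunning the traversal whenever the camera or a parameter changes yields the progressive, view-dependent updates described in the abstract.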

    VR-OOS: The DLR’s Virtual Reality Simulator for Telerobotic On-Orbit Servicing with Haptic Feedback

    The growth of space debris is becoming a severe issue that urgently requires mitigation measures based on maintenance, repair, and de-orbiting technologies. Such on-orbit servicing (OOS) missions, however, are delicate and expensive. Virtual Reality (VR) enables simulation and training in a flexible and safe environment, and hence has the potential to drastically reduce costs and time, while increasing the success rate of future OOS missions. This paper presents a highly immersive VR system with which satellite maintenance procedures can be simulated interactively using visual and haptic feedback. The system can be used for verification and training purposes for human and robot systems interacting in space. Our framework combines unique realistic virtual reality simulation engines with advanced immersive interaction devices. The DLR bimanual haptic device HUG is used as the main user interface. The HUG is equipped with two lightweight robot arms and is able to provide realistic haptic feedback on both human arms. Additional devices provide vibrotactile and electrotactile feedback at the elbow and the fingertips. A particularity of the real-time simulation is the fusion of the Bullet physics engine with our haptic rendering algorithm, which is an enhanced version of the Voxmap-Pointshell Algorithm. Our haptic rendering engine supports multiple objects in the scene and is able to compute collisions for each of them within 1 ms, enabling realistic virtual manipulation tasks even for stiff collision configurations. The visualization engine ViSTA is used during the simulation to achieve photo-realistic effects, increasing the immersion. In order to provide a realistic experience at interactive frame rates, we developed a distributed system architecture, where the load of computing the physics simulation, haptic feedback and visualization of a complex scene is transferred to dedicated machines.
The implementations are presented in detail, and the performance of the overall system is validated. Additionally, a preliminary user study in which the virtual system is compared to a physical test bed shows the suitability of the VR-OOS framework.
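    The penalty-based force computation at the heart of a Voxmap-Pointshell-style collision response can be sketched as follows. In the real algorithm the moving object carries a point shell and the static object a precomputed voxel map evaluated within the 1 ms budget; here `voxmap` is a hypothetical callable returning penetration depth, and all names are illustrative.

```python
def pointshell_force(points, normals, voxmap, stiffness):
    """Sum of penalty forces over a point shell: each point samples the
    other object's voxelized field via voxmap(p) (> 0 means penetration)
    and, if penetrating, pushes along its inward surface normal with a
    force proportional to the penetration depth."""
    force = [0.0, 0.0, 0.0]
    for p, n in zip(points, normals):
        depth = voxmap(p)
        if depth > 0.0:                 # only penetrating points contribute
            for i in range(3):
                force[i] += stiffness * depth * n[i]
    return force
```

    Rendering this force on a device like the HUG at haptic rates (around 1 kHz) is what makes stiff contact configurations feel convincing.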

    Virtual Hydrology Observatory: An Immersive Visualization of Hydrology Modeling

    The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation with an instructional interface, using either a desktop-based or an immersive virtual reality setup. It is the goal of the Virtual Hydrology Observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model Weather Research and Forecasting (WRF) and the hydrology model Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The outputs from both the WRF and GSSHA models are then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using the VRFlowVis and VR Juggler software toolkits. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE(TM)-like system is used to run the Virtual Hydrology Observatory to provide the students with a fully immersive experience.